
    Text-aided object segmentation and classification in images

    Object recognition in images is a popular research field with many applications, including medicine, robotics and face recognition. The task of automatically finding and identifying objects in an image is extremely challenging. By approaching the problem from a new angle and including additional information besides the visual, the problem becomes less ill-posed. In this thesis we investigate how the addition of text annotations to images affects the classification process. Classifications of different sets of labels, as well as clusters of labels, were carried out. A comparison is given between the results from using only visual information and from also including information from an image description. In most cases the additional information improved the accuracy of the classification. The obtained results were then used to design an algorithm that could, given an image with a description, find relevant words in the text and mark their presence in the image. A large set of overlapping segments is generated and each segment is classified into a set of categories. The image descriptions are parsed by an algorithm (a so-called chunker) and visually relevant words (key-nouns) are extracted from the text. These key-nouns are then connected to the categories through metrics from WordNet. To create an optimal assignment of the visual segments to the key-nouns, combinatorial optimization was used. The resulting system was compared to manually segmented and classified images. The results are promising and have given rise to several new ideas for continued research.
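
    The final step described above, matching key-nouns to visual segments, can be sketched as an assignment problem over a WordNet similarity matrix. The sketch below is a minimal illustration under assumed tools (NLTK for WordNet similarities, SciPy's Hungarian solver) and example labels; it is not the thesis implementation.

    import numpy as np
    from nltk.corpus import wordnet as wn  # assumes nltk.download('wordnet') has been run
    from scipy.optimize import linear_sum_assignment

    def noun_similarity(word_a, word_b):
        # Highest Wu-Palmer similarity over all noun synset pairs (0 if none found).
        scores = [a.wup_similarity(b) or 0.0
                  for a in wn.synsets(word_a, pos=wn.NOUN)
                  for b in wn.synsets(word_b, pos=wn.NOUN)]
        return max(scores, default=0.0)

    def assign_segments_to_keynouns(segment_labels, key_nouns):
        # One-to-one assignment maximising the total WordNet similarity.
        sim = np.array([[noun_similarity(label, noun) for noun in key_nouns]
                        for label in segment_labels])
        rows, cols = linear_sum_assignment(-sim)  # negate to maximise
        return [(segment_labels[r], key_nouns[c], sim[r, c]) for r, c in zip(rows, cols)]

    # Hypothetical example: segment category labels vs. key-nouns chunked from a caption.
    print(assign_segments_to_keynouns(["dog", "tree", "sky"], ["puppy", "cloud"]))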

    Visual Entity Linking: A Preliminary Study

    In this paper, we describe a system that jointly extracts entities appearing in images and mentioned in their accompanying captions. As input, the entity linking program takes a segmented image together with its caption. It consists of a sequence of processing steps: part-of-speech tagging, dependency parsing, and coreference resolution, which enable us to identify the entities as well as possible textual relations from the captions. The program uses the image regions labelled with a set of predefined categories and computes WordNet similarities between these labels and the entity names. Finally, the program links the entities it detected across the text and the images. We applied our system to the Segmented and Annotated IAPR TC-12 dataset, which we enriched with entity annotations, and we obtained a correct assignment rate of 55.48.
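
    The WordNet similarity step described above can be illustrated with a small sketch: for each entity head noun from the caption, pick the most similar region label. NLTK's path similarity, the threshold and the example words are illustrative assumptions, not the values used in the paper.

    from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is available

    def label_entity_similarity(label, entity_noun):
        # Best path similarity between any pair of noun synsets for the two words.
        pairs = [(a, b) for a in wn.synsets(label, pos=wn.NOUN)
                        for b in wn.synsets(entity_noun, pos=wn.NOUN)]
        return max((a.path_similarity(b) or 0.0 for a, b in pairs), default=0.0)

    def link_entities(region_labels, entity_nouns, threshold=0.2):
        # Link each caption entity to the most similar labelled image region.
        links = {}
        for noun in entity_nouns:
            score, label = max((label_entity_similarity(lab, noun), lab)
                               for lab in region_labels)
            if score >= threshold:  # leave the entity unlinked if nothing is close enough
                links[noun] = label
        return links

    # Hypothetical region labels and caption entities.
    print(link_entities(["man", "bicycle", "road"], ["cyclist", "street"]))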

    Interactive Online Machine Learning

    With the Internet of Things paradigm, the data generated by the rapidly increasing number of connected devices lead to new possibilities, such as using machine learning for activity recognition in smart environments. However, it also introduces several challenges. The sensors of different devices might be mobile and of different types, i.e. there is a need to handle streaming data from a dynamic and heterogeneous set of sensors. In machine learning, performance is often linked to the availability and quality of annotated data. Annotating data is in general costly, but it can be even more challenging if there is no, or only a very small amount of, annotated data to train the model on at the start of learning. To handle these issues, we implement interactive and adaptive systems. By including a human in the loop, which we refer to as interactive machine learning, the input from users can be utilized to build the model. The type of input used in interactive machine learning is typically annotations of the data, i.e. correctly labelled data points. Generally, it is assumed that the user always provides correct labels in accordance with the chosen interactive learning strategy. In many real-world applications, however, these assumptions are not realistic, as users might provide incorrect labels or not provide labels at all in line with the chosen strategy. In this thesis we explore which types of interactive learning strategies are possible in the given scenario and how they affect performance, as well as the effect of machine learning algorithms on performance. We also study how a user who is not always reliable, i.e. who does not always provide a correct label when expected to, can affect performance. We propose a taxonomy of interactive online machine learning strategies and test how the different strategies affect performance through experiments on multiple datasets. Simulated experiments are compared to experiments with human participants to verify the results. The findings show that the overall best performing interactive learning strategy is one where the user provides labels when current estimations are incorrect, but that the best performing machine learning algorithm depends on the problem scenario. The experiments also show that a decreased reliability of the user leads to decreased performance, especially when there is a limited amount of labelled data. The robustness of the machine learning algorithms differs, where e.g. the Naïve Bayes classifier is better at handling a lower reliability of the user. We also present a systematic literature review on machine teaching, a subfield of interactive machine learning where the human is proactive in the interaction. The study shows that the area of machine teaching is rapidly evolving, with an increased number of publications in recent years. However, as the field is still maturing, there exist several open challenges that would benefit from further exploration, e.g. how human factors can affect performance.
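
    The best performing strategy reported above, where the user supplies a label only when the current estimation is incorrect, can be sketched as a simple online loop. The learner, dataset and loop below are illustrative assumptions (scikit-learn's online Naïve Bayes on synthetic data), not the thesis setup.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    classes = np.unique(y)

    model = GaussianNB()
    trained, labels_used, correct = False, 0, 0

    for x_t, y_t in zip(X, y):  # instances arrive one at a time (online learning)
        x_t = x_t.reshape(1, -1)
        pred = model.predict(x_t)[0] if trained else None
        if pred == y_t:
            correct += 1  # correct estimation: the user stays silent
        else:
            model.partial_fit(x_t, [y_t], classes=classes)  # the user corrects the mistake
            trained, labels_used = True, labels_used + 1

    print(f"online accuracy {correct / len(y):.2f} using {labels_used} labels")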

    Approaches to Interactive Online Machine Learning

    With the Internet of Things paradigm, the data generated by the rapidly increasing number of connected devices lead to new possibilities, such as using machine learning for activity recognition in smart environments. However, it also introduces several challenges. The sensors of different devices might be of different types, making the fusion of data non-trivial. Moreover, the devices are often mobile, meaning that data from a particular sensor is not always available, i.e. there is a need to handle data from a dynamic set of sensors. From a machine learning perspective, the data from the sensors arrive in a streaming fashion, i.e. online learning, as opposed to many learning problems where a static dataset is assumed. Machine learning is in many cases a good approach for classification problems, but the performance is often linked to the quality of the data. Having a good dataset to train a model on can be an issue in general, due to the often costly process of annotating the data. With dynamic and heterogeneous data, annotation can be even more problematic because of the ever-changing environment. This means that there might not be any, or only a very small amount of, annotated data to train the model on at the start of learning, often referred to as the cold start problem. To be able to handle these issues, adaptive systems are needed. By adaptive we mean that the model is not static over time, but is updated if, for instance, there is a change in the environment. By including a human in the loop during the learning process, which we refer to as interactive machine learning, the input from users can be utilized to build the model. The type of input used is typically annotations of the data, i.e. user input in the form of correctly labelled data points. Generally, it is assumed that the user always provides correct labels in accordance with the chosen interactive learning strategy. In many real-world applications, however, these assumptions are not realistic, as users might provide incorrect labels or not provide labels at all in line with the chosen strategy. In this thesis we explore which interactive learning strategies are possible in the given scenario and how they affect performance, as well as the effect of machine learning algorithms on performance. We also study how a user who is not always reliable, i.e. who does not always provide a correct label when expected to, can affect performance. We propose a taxonomy of interactive online machine learning strategies and test how the different strategies affect performance through experiments on multiple datasets. The findings show that the overall best performing interactive learning strategy is one where the user provides labels when previous estimations have been incorrect, but that the best performing machine learning algorithm depends on the problem scenario. The experiments also show that a decreased reliability of the user leads to decreased performance, especially when there is a limited amount of labelled data.
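
    The need to handle a dynamic, heterogeneous set of sensors described above can be sketched by hashing whatever sensor readings happen to be available into a fixed-width feature vector that an online learner can always consume. The sensor names, classes and tools (scikit-learn's FeatureHasher and SGDClassifier) are illustrative assumptions.

    from sklearn.feature_extraction import FeatureHasher
    from sklearn.linear_model import SGDClassifier

    hasher = FeatureHasher(n_features=64)  # fixed feature space regardless of active sensors
    model = SGDClassifier()
    classes = ["idle", "walking", "cooking"]

    stream = [
        ({"kitchen_motion": 1.0, "stove_temp": 35.0}, "cooking"),
        ({"hall_motion": 1.0, "wrist_accel": 0.7}, "walking"),  # a different set of sensors
        ({"wrist_accel": 0.1}, "idle"),                         # some sensors unavailable
    ]

    for readings, label in stream:
        x = hasher.transform([readings])  # same width however many sensors report
        model.partial_fit(x, [label], classes=classes)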

    Activity Recognition through Interactive Machine Learning in a Dynamic Sensor Setting

    Advances in the Internet of Things lead to an increased number of devices generating and streaming data. These devices can be useful data sources for activity recognition by using machine learning. However, the set of available sensors may vary over time, e.g. due to mobility of the sensors and technical failures. Since the machine learning model uses the data streams from the sensors as input, it must be able to handle a varying number of input variables, i.e. a feature space that might change over time. Moreover, the labelled data necessary for the training is often costly to acquire. In active learning, the model is given a budget for requesting labels from an oracle, and aims to maximize accuracy by careful selection of which data instances to label. It is generally assumed that the role of the oracle is only to respond to queries and that it will always do so. In many real-world scenarios, however, the oracle is a human user, and these assumptions are simplifications that might not give a proper depiction of the setting. In this work we investigate different interactive machine learning strategies, of which active learning is one, and explore the effects of an oracle that can be more proactive, as well as factors that might influence a user to provide or withhold labels. We implement five interactive machine learning strategies, as well as hybrid versions of them, and evaluate them on two datasets. The results show that a more proactive user can improve the performance, especially when the user is influenced by the accuracy of earlier predictions. The experiments also highlight challenges related to evaluating performance when the set of classes is changing over time. Correction available: https://doi.org/10.1007/s00779-020-01465-5
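
    The budgeted active learning setting described above can be sketched as stream-based uncertainty sampling: the learner queries the oracle only while budget remains and only when its confidence is low. The learner, synthetic data, seed size and thresholds below are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
    classes = np.unique(y)

    model = SGDClassifier(loss="log_loss", random_state=1)  # probabilistic outputs
    budget, confidence_threshold = 100, 0.7

    # Seed the model with a handful of labelled instances to get past the cold start.
    model.partial_fit(X[:10], y[:10], classes=classes)

    for x_t, y_t in zip(X[10:], y[10:]):
        x_t = x_t.reshape(1, -1)
        confidence = model.predict_proba(x_t).max()
        if confidence < confidence_threshold and budget > 0:  # uncertain: query the oracle
            model.partial_fit(x_t, [y_t])
            budget -= 1

    print("labels left in budget:", budget)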

    Human Factors in Interactive Online Machine Learning

    Interactive machine learning (ML) adds a human-in-the-loop aspect to an ML system. Even though the input from human users to the system is a central part of the concept, the uncertainty caused by the human feedback is often not considered in interactive ML. The assumption that the human user always provides correct feedback typically does not hold in real-world scenarios. This is especially important when the cognitive workload of the human is high, for instance in online learning from streaming data, where there are time constraints on providing the feedback. We present experiments on interactive online ML with human participants and compare the results to simulated experiments where humans are always correct. We found that combining the two interactive learning paradigms, active learning and machine teaching, resulted in better performance compared to machine teaching alone. The results also showed an increased discrepancy between the experiments with human participants and the simulated experiments when the cognitive workload was increased. The findings suggest the importance of taking uncertainty caused by human factors into consideration in interactive ML, especially in situations that require a high cognitive workload for the human.
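
    The combination of paradigms described above can be sketched as a per-instance rule for when a label reaches the learner: the learner asks when it is uncertain (active learning), and the user may also correct a prediction they notice is wrong (machine teaching), with an attention probability standing in for cognitive workload. All parameter names and values are illustrative assumptions, not the paper's experimental settings.

    import random

    def label_obtained(confidence, prediction, true_label,
                       confidence_threshold=0.7, attention=0.6):
        # True if a label reaches the learner for this instance.
        learner_asks = confidence < confidence_threshold   # active learning query
        user_corrects = (prediction != true_label
                         and random.random() < attention)  # machine teaching, thinned by workload
        return learner_asks or user_corrects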

    The Effects of Reluctant and Fallible Users in Interactive Online Machine Learning

    In interactive machine learning it is important to select the most informative data instances to label, in order to minimize the effort of the human user. There are basically two categories of interactive machine learning. In the first category, active learning, it is the computational learner that selects which data are to be labelled by the human user, whereas in the second one, machine teaching, the selection is done by the human teacher. It is often assumed that the human user is a perfect oracle, i.e. that a label will always be provided in accordance with the interactive learning strategy and that this label will always be correct. In real-world scenarios, however, these assumptions typically do not hold. In this work, we investigate how the reliability of the user providing labels affects the performance of online machine learning. Specifically, we study reluctance, i.e. to what extent the user does not provide labels in accordance with the strategy, and fallibility, i.e. to what extent the provided labels are incorrect. We show results of experiments on a benchmark dataset as well as a synthetically created dataset. By varying the degree of reluctance and fallibility of the user, the robustness of the different interactive learning strategies and machine learning algorithms is explored. The experiments show that the robustness of the strategies and algorithms varies. Moreover, certain machine learning algorithms are more robust towards reluctance compared to fallibility, while the opposite is true for others.
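
    The two user properties studied above can be sketched as a simulated oracle: with some probability the user withholds the label (reluctance) and, when answering, may return a wrong class (fallibility). The probabilities and class set below are illustrative assumptions, not values from the paper.

    import random

    def simulated_user(true_label, classes, reluctance=0.3, fallibility=0.1):
        # Return a label from an unreliable user, or None if they decline to answer.
        if random.random() < reluctance:
            return None                                    # reluctance: no label provided
        if random.random() < fallibility:
            wrong = [c for c in classes if c != true_label]
            return random.choice(wrong)                    # fallibility: incorrect label
        return true_label

    # Example: a stream of true labels filtered through the unreliable user.
    print([simulated_user(y, classes=[0, 1, 2]) for y in [0, 1, 2, 1, 0]])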